Speaker verification systems have been widely used in smartphones and IoT devices to identify legitimate users. Recent work has shown that adversarial attacks such as FakeBob can work effectively against speaker verification systems. The goal of this paper is to design a detector that can distinguish an original audio signal from an audio signal contaminated by adversarial attacks. Specifically, our designed detector, called MEH-FEST, computes the minimum energy in high frequencies from the short-time Fourier transform of an audio signal and uses it as a detection metric. Through both analysis and experiments, we show that our proposed detector is easy to implement, fast to process an input audio signal, and effective in determining whether an audio signal is corrupted by FakeBob attacks. The experimental results indicate that the detector is extremely effective, with near-zero false-positive and false-negative rates for detecting FakeBob attacks against both Gaussian mixture model (GMM) and i-vector speaker verification systems. Moreover, adaptive adversarial attacks against our proposed detector and their countermeasures are discussed and studied, showcasing the game between attackers and defenders.
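To make the detection metric concrete, below is a minimal sketch of computing the minimum high-frequency energy over short-time frames; the cutoff frequency, frame length, and decision threshold are illustrative assumptions, not the paper's settings.

```python
# Minimum energy in high-frequency STFT bins, taken over short-time frames.
import numpy as np
from scipy.signal import stft

def min_high_freq_energy(audio: np.ndarray, sr: int, cutoff_hz: float = 4000.0) -> float:
    """Return the minimum per-frame energy in STFT bins above `cutoff_hz`."""
    freqs, _, Z = stft(audio, fs=sr, nperseg=512)
    high = freqs >= cutoff_hz                           # high-frequency bin mask (assumed cutoff)
    energy_per_frame = np.sum(np.abs(Z[high, :]) ** 2, axis=0)
    return float(energy_per_frame.min())

if __name__ == "__main__":
    sr = 16000
    audio = np.random.default_rng(0).standard_normal(sr)  # stand-in for a real utterance
    score = min_high_freq_energy(audio, sr)
    # Audio would be flagged as adversarial when the metric exceeds a threshold
    # tuned on clean recordings (a perturbation tends to raise this energy floor).
    print(f"detection metric: {score:.6f}")
```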
Recent sequential recommendation models rely increasingly on consecutive short-term user interaction sequences to model user interests. However, these approaches raise concerns about both short-term and long-term interests. (1) Short-term: an interaction sequence may not result from a single interest but rather from several intertwined interests, even within a short period, which leads to their failure to model skip behaviors; (2) Long-term: interaction sequences are mostly observed sparsely at discrete intervals rather than continuously over a long period. This makes it difficult to infer long-term interests, since only discrete interest representations can be derived without accounting for interest dynamics across sequences. In this study, we address these issues by learning (1) multi-scale representations of short-term interests and (2) dynamics-aware representations of long-term interests. To this end, we present an Interest Dynamics modeling framework using generative Neural Processes, coined IDNP, to model user interests from a functional perspective. IDNP learns a global interest function family to define each user's long-term interest as a function instantiation, manifesting interest dynamics through functional continuity. Specifically, IDNP first encodes each user's short-term interactions into multi-scale representations, which are then summarized as user context. By combining the latent global interest with the user context, IDNP then reconstructs the long-term user interest function and predicts interactions at upcoming query timesteps. Moreover, IDNP can model such interest functions even when interaction sequences are limited and non-consecutive. Extensive experiments on four real-world datasets demonstrate that our model outperforms state-of-the-art methods on various evaluation metrics.
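As a rough illustration of modeling interests as functions with a neural process, the sketch below encodes (time, interaction) context pairs, pools them into a user context, and decodes interest vectors at query times. The architecture, dimensions, and deterministic mean aggregation are our assumptions; IDNP's multi-scale encoding and generative formulation are richer.

```python
# Conditional-neural-process-style sketch: a user's interest as a function of time.
import torch
import torch.nn as nn

class InterestNP(nn.Module):
    def __init__(self, item_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(1 + item_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, item_dim))

    def forward(self, t_ctx, x_ctx, t_query):
        # Encode each (time, interaction) pair, then mean-pool into a user context.
        r = self.encoder(torch.cat([t_ctx, x_ctx], dim=-1)).mean(dim=1, keepdim=True)
        r = r.expand(-1, t_query.size(1), -1)
        # Decode the interest function at the query times.
        return self.decoder(torch.cat([r, t_query], dim=-1))

# Toy usage: batch of 2 users, 5 context interactions, 3 query times.
model = InterestNP()
t_ctx = torch.rand(2, 5, 1); x_ctx = torch.randn(2, 5, 16); t_q = torch.rand(2, 3, 1)
pred = model(t_ctx, x_ctx, t_q)   # (2, 3, 16) predicted interest vectors
print(pred.shape)
```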
Recent studies have shown that visual stimuli perceived by humans can be reconstructed as images from electroencephalography (EEG) using a two-stage supervised framework, referred to as EEG-based reconstruction. However, such methods cannot reproduce the exact visual stimulus, because it is the human-specified annotation of an image, rather than its data, that determines what the synthesized image is. Moreover, the synthesized images often suffer from noisy EEG encodings and unstable training of generative models, making them hard to recognize. Instead, we propose a single-stage EEG-visual retrieval paradigm in which the data of the two modalities are correlated, as opposed to their annotations, allowing us to recover the exact visual stimulus for an EEG clip. We maximize the mutual information between the EEG encoding and its associated visual stimulus by optimizing a contrastive self-supervised objective, which brings two additional benefits. First, it enables the EEG encodings to handle visual classes beyond those seen during training, since learning is not directed at class annotations. In addition, the model is no longer required to generate every detail of the visual stimulus; instead, it focuses on cross-modal alignment and retrieves images at the instance level, ensuring distinguishable model outputs. Empirical studies are conducted on the largest single-subject EEG dataset that measures brain activity evoked by image stimuli. We demonstrate that the proposed method completes the instance-level EEG-visual retrieval task, which existing methods cannot do. We also examine the implications of a range of EEG and visual encoder structures. Furthermore, on the mostly studied semantic-level EEG-visual classification task, the method outperforms state-of-the-art supervised EEG-visual reconstruction approaches despite not using class annotations, particularly regarding the ability of open-class recognition.
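A minimal sketch of the contrastive cross-modal objective, in the style of InfoNCE (a standard lower bound on mutual information): paired EEG and image embeddings attract, unpaired ones repel. The stand-in embeddings and the temperature are assumptions, not the paper's encoders or hyperparameters.

```python
# Symmetric InfoNCE over a batch of paired EEG/image embeddings.
import torch
import torch.nn.functional as F

def infonce_loss(eeg_emb: torch.Tensor, img_emb: torch.Tensor, tau: float = 0.07):
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = eeg @ img.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(eeg.size(0))          # matching pairs on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Retrieval at test time reduces to nearest-neighbour search in the shared space.
B, d = 8, 128
loss = infonce_loss(torch.randn(B, d), torch.randn(B, d))
print(loss.item())
```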
We introduce a neural implicit representation for grasps of objects from multiple robotic hands. Different grasps across multiple robotic hands are encoded into a shared latent space. Each latent vector is learned to decode into the 3D shape of an object and the 3D shape of a robotic hand in a grasping pose, in terms of the signed distance functions of the two 3D shapes. In addition, a distance metric in the latent space is learned to preserve the similarity between grasps across different robotic hands, where the similarity of grasps is defined according to the contact regions of the robotic hands. This property enables us to transfer grasps between different grippers, including a human hand, and such grasp transfer has the potential to share grasping skills between robots and to let robots learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps in our implicit representation can be used for 6D object pose estimation with grasp contact optimization from partial point clouds, which can enable robotic grasping in the real world.
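The sketch below shows one plausible shape for such a latent-conditioned decoder: a single latent grasp code decodes, at any 3D query point, the signed distances to the object surface and to the hand surface. Layer sizes and the two-headed output are illustrative assumptions, not the paper's architecture.

```python
# Latent-conditioned signed distance decoder with object and hand heads.
import torch
import torch.nn as nn

class GraspSDFDecoder(nn.Module):
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))   # outputs [sdf_object, sdf_hand]

    def forward(self, z: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        z = z.unsqueeze(1).expand(-1, points.size(1), -1)  # broadcast latent to each query point
        return self.net(torch.cat([z, points], dim=-1))

# Usage: one latent grasp code queried at 1024 points yields both SDFs.
decoder = GraspSDFDecoder()
sdf = decoder(torch.randn(1, 64), torch.rand(1, 1024, 3))
print(sdf.shape)  # (1, 1024, 2)
```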
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
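A schematic, runnable sketch of the replay-based sequential adaptation loop: when a new unlabeled domain arrives, past appearance is replayed from a generator instead of stored data, and a pre-adaptation snapshot supplies targets on the replayed images. The toy models, the entropy-minimization term, and all hyperparameters are stand-in assumptions; GarDA's actual generator and losses are specified in the paper.

```python
# Continual adaptation with generated replay instead of stored past data.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Conv2d(1, 2, 3, padding=1)             # toy 2-class segmentation model
generator = lambda n: torch.randn(n, 1, 32, 32)       # stand-in appearance generator
opt = torch.optim.Adam(segmenter.parameters(), lr=1e-3)

domains = [torch.randn(16, 1, 32, 32) for _ in range(3)]  # stream of unlabeled domains
for batch in domains:
    teacher = copy.deepcopy(segmenter).eval()         # snapshot before adapting
    replay = generator(batch.size(0))                 # replayed "past-appearance" images
    with torch.no_grad():
        target = teacher(replay)                      # consolidation targets, no stored data
    probs = F.softmax(segmenter(batch), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = F.mse_loss(segmenter(replay), target) + 0.1 * entropy
    opt.zero_grad(); loss.backward(); opt.step()
print("adapted across", len(domains), "domains without storing past data")
```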
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
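To illustrate why multiple relations help, below is a generic RGCN-style layer that aggregates neighbours separately per relation type and sums the results. This is not MGTAB's reference implementation, and the toy two-relation graph is an assumption.

```python
# Per-relation neighbour aggregation, summed across relation types.
import torch
import torch.nn as nn

class MultiRelationLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        self.weights = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                     for _ in range(num_relations))
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adjs: list) -> torch.Tensor:
        out = self.self_loop(x)
        for A, W in zip(adjs, self.weights):   # one normalized adjacency per relation
            out = out + A @ W(x)
        return torch.relu(out)

# Toy usage: 5 accounts, 8 features, 2 relation types (e.g. follower / friend).
x = torch.randn(5, 8)
adjs = [torch.eye(5), torch.ones(5, 5) / 5]
layer = MultiRelationLayer(8, 4, num_relations=2)
print(layer(x, adjs).shape)  # (5, 4)
```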
As one of the prevalent methods for building automation systems, Imitation Learning (IL) presents a promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, whether the importance map is effective, and how importance maps from different IL models are connected. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
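A minimal sketch of the RISE-style loop described above: sample random binary masks over demonstration frames, retrain and evaluate the black-box IL model on the masked demonstrations, and accumulate the masks weighted by the resulting environment return. Here `train_and_evaluate` is a hypothetical stand-in for that expensive retraining step.

```python
# Importance map over demonstration frames via randomized masking.
import numpy as np

def frame_importance(num_frames: int, num_iters: int, keep_prob: float,
                     train_and_evaluate) -> np.ndarray:
    importance = np.zeros(num_frames)
    total = np.zeros(num_frames)
    rng = np.random.default_rng(0)
    for _ in range(num_iters):
        mask = rng.random(num_frames) < keep_prob   # which frames are kept
        ret = train_and_evaluate(mask)              # environment return as the weight
        importance += ret * mask
        total += mask
    return importance / np.maximum(total, 1)        # per-frame weighted average

# Toy usage: pretend frames 10-19 are the only ones that matter, so their
# accumulated scores come out higher than the rest.
toy_eval = lambda m: float(m[10:20].mean())
imp = frame_importance(100, num_iters=500, keep_prob=0.5, train_and_evaluate=toy_eval)
print(imp[:5], imp[10:15])
```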
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and improved consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSFormer is improved to detect them. Based on these six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
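As a toy illustration of saliency-aware pooling, the sketch below weights a per-pixel artifact score map by a saliency map before spatial pooling, so artifacts in salient regions dominate the score. Both maps and the simple weighted mean are assumptions; the paper's saliency model and TimeSFormer-based temporal detector are far richer.

```python
# Saliency-weighted pooling of a per-pixel artifact score map.
import numpy as np

def saliency_weighted_score(artifact_map: np.ndarray, saliency: np.ndarray) -> float:
    """Pool a per-pixel artifact score map, weighting by visual saliency."""
    w = saliency / (saliency.sum() + 1e-8)
    return float((artifact_map * w).sum())

h, w = 64, 64
artifact = np.random.rand(h, w)   # stand-in blocking/ringing score map
saliency = np.random.rand(h, w)   # stand-in saliency map
print(saliency_weighted_score(artifact, saliency))
```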
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
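Schematically, and in our own notation rather than necessarily the paper's, the weighted mean-percentile objective over a Wasserstein ambiguity set can be written as:

```latex
% Return-risk objective (schematic): a weighted combination of worst-case
% expected return and a worst-case percentile (VaR-type) return, with the
% reward distribution ranging over a Wasserstein ball around an estimate.
\max_{\pi \in \Pi} \;
  \lambda \inf_{\mathbb{P} \in \mathcal{B}_{\varepsilon}(\hat{\mathbb{P}})}
    \mathbb{E}_{\mathbb{P}}\big[ R^{\pi} \big]
  + (1-\lambda) \inf_{\mathbb{P} \in \mathcal{B}_{\varepsilon}(\hat{\mathbb{P}})}
    \mathrm{VaR}_{\alpha}^{\mathbb{P}}\big( R^{\pi} \big),
\qquad
\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}})
  = \big\{ \mathbb{P} : W(\mathbb{P}, \hat{\mathbb{P}}) \le \varepsilon \big\}.
```

Under this reading, the weight trades off mean against percentile performance, and its extreme values recover the purely robust and purely chance-constrained models mentioned above as special cases.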
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
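The sketch below illustrates the photometric objective that drives the second stage: an L1 plus SSIM-style error between a warped (reconstructed) frame and the real frame. The warp itself is elided, the SSIM proxy is simplified, and the weighting constant is a common default from self-supervised depth work, not necessarily PPGeo's.

```python
# L1 + (1 - SSIM)-style photometric error, as common in self-supervised depth.
import torch
import torch.nn.functional as F

def photometric_loss(reconstructed: torch.Tensor, target: torch.Tensor,
                     alpha: float = 0.85) -> torch.Tensor:
    l1 = (reconstructed - target).abs().mean()
    # Simplified SSIM over 3x3 average-pooled local statistics.
    mu_x = F.avg_pool2d(reconstructed, 3, 1, 1); mu_y = F.avg_pool2d(target, 3, 1, 1)
    var_x = F.avg_pool2d(reconstructed ** 2, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(reconstructed * target, 3, 1, 1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + 1e-4) * (2 * cov + 1e-4)) / \
           ((mu_x ** 2 + mu_y ** 2 + 1e-4) * (var_x + var_y + 1e-4))
    return alpha * (1 - ssim.clamp(0, 1)).mean() / 2 + (1 - alpha) * l1

img = torch.rand(1, 3, 64, 64)
print(photometric_loss(img + 0.05 * torch.randn_like(img), img).item())
```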